CUDA: fix logic for clearing padding with -ngl 0 #13320
Conversation
ggml/src/ggml-cuda/mmq.cu
Outdated
```cpp
const size_t size_data  = ggml_nbytes(src0);
const size_t size_alloc = ggml_backend_buffer_get_alloc_size(src0->buffer, src0);
if (size_alloc > size_data) {
    CUDA_CHECK(cudaMemsetAsync((char *) src0->data + size_data, 0, size_alloc - size_data));
}
```
This is missing the stream parameter.
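For reference, `cudaMemsetAsync` takes the stream as its fourth argument; a corrected call would look like the sketch below, assuming a `cudaStream_t stream` is in scope in the surrounding launch function:

```cpp
// Pass the CUDA stream so the memset is ordered with the subsequent kernel
// launch on the same stream instead of running on the default stream.
CUDA_CHECK(cudaMemsetAsync((char *) src0->data + size_data, 0,
    size_alloc - size_data, stream));
```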
ggml/src/ggml-cuda/mmvq.cu
Outdated
```cpp
const size_t size_data  = ggml_nbytes(src0);
const size_t size_alloc = ggml_backend_buffer_get_alloc_size(src0->buffer, src0);
if (size_alloc > size_data) {
    CUDA_CHECK(cudaMemsetAsync((char *) src0->data + size_data, 0, size_alloc - size_data));
}
```
Also here.
Fixes #13305.

The problem is that in `ggml_cuda_op_mul_mat` the padding of temporary compute buffers is cleared just prior to kernel execution; when I wrote code to directly launch MMVQ and MMQ I forgot to add the corresponding functionality. This PR simply adds it. The changes in MMVQ should not actually be invoked unless the minimum number of tokens for evaluating the model on the GPU is lowered.

Prior to #13199 the logic for clearing the padding was already wrong, but in a different way: instead of padding not being cleared when it ought to be, slices of `src0` passed to `ggml_cuda_op_mul_mat` would cause valid tensor data to be cleared. I did not touch that logic; now that `MUL_MAT_ID` can be handled in a single kernel launch it should never be invoked (I also added an assert).
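A minimal sketch of the shape of the added padding-clearing step in the direct MMQ/MMVQ launch path, based on the reviewed snippets above with the stream fix applied; the helper name `clear_src0_padding` and the `view_src` assert are illustrative assumptions, not the PR's exact code:

```cpp
// Hypothetical helper: zero the padding that the backend allocator may append
// past the end of a quantized src0 in a temporary compute buffer, so that
// kernels reading beyond ggml_nbytes(src0) see zeros instead of garbage.
static void clear_src0_padding(const ggml_tensor * src0, cudaStream_t stream) {
    const size_t size_data  = ggml_nbytes(src0);
    const size_t size_alloc = ggml_backend_buffer_get_alloc_size(src0->buffer, src0);
    if (size_alloc > size_data) {
        // Assumption: src0 must own its allocation; clearing the "padding" of
        // a view/slice would overwrite valid data of the parent tensor.
        GGML_ASSERT(src0->view_src == nullptr);
        CUDA_CHECK(cudaMemsetAsync((char *) src0->data + size_data, 0,
            size_alloc - size_data, stream)); // ordered on the compute stream
    }
}
```

Called just before the kernel launch, this mirrors what `ggml_cuda_op_mul_mat` already did for its temporary compute buffers.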